What We'll Cover
AI writing tools are everywhere, and they range from general-purpose chatbots to purpose-built academic writing assistants. This session maps the landscape, honestly evaluates what each category offers, and confronts two critical questions: Can AI help level the playing field for non-native English speakers? And does that help come at the cost of linguistic and cultural diversity?
We will also look at the evidence for whether these tools actually improve writing quality and efficiency, rather than just assuming they do. The answers are more nuanced than most marketing materials suggest.
🗂️ The Four Categories of AI Writing Tools
AI writing tools fall into four broad categories. Understanding which category a tool belongs to helps you set realistic expectations for what it will and will not do for your writing.
General-Purpose LLMs
Tools: Claude, ChatGPT, Gemini
The most versatile category. These can brainstorm ideas, give feedback on drafts, restructure arguments, explain concepts, and help with everything from abstracts to cover letters. They are not designed specifically for academic writing, which means they lack specialist features like citation management or journal-specific style checking — but their flexibility is their strength.
- Best for brainstorming, feedback, restructuring, and overcoming writer's block
- Can adapt to almost any writing task with good prompting
- Free tiers available for all three (with usage limits)
- No built-in citation tools or academic style enforcement
- Quality of output depends heavily on how well you prompt
Specialist Academic Writing Tools
Tools: Paperpal, Jenni AI, Paperguide, Yomu
Designed specifically for academic contexts. These tools offer features like citation integration, academic style checking, paraphrasing with source awareness, and journal-specific formatting. They understand the conventions of academic writing in ways general LLMs do not, but their narrower focus means they are less useful outside their intended domain.
- Academic-aware: understand citation formats, section conventions, hedging language
- Some integrate directly with reference managers and databases
- Mostly paid, typically $10–20 per month for full features
- Less flexible than general LLMs for non-standard tasks
- Quality varies significantly between tools — test before committing
Grammar and Style Tools
Tools: Grammarly, ProWritingAid, LanguageTool
Focused on surface-level corrections: spelling, grammar, punctuation, sentence clarity, and style consistency. These tools are good at what they do, but they fundamentally do not touch content. They will fix your comma splices and flag passive voice, but they will not tell you that your argument is weak or your literature review is missing a key source.
- Excellent for catching mechanical errors and improving readability
- Free tiers cover basic grammar; paid tiers add style, tone, and clarity suggestions
- LanguageTool is open-source and supports many languages
- Will not evaluate or improve the substance of your argument
- Can be overly prescriptive — not every suggestion is an improvement
Translation Tools
Tools: DeepL, Google Translate
Critical for multilingual researchers working across language boundaries. DeepL is generally considered superior for academic and technical text, producing more natural-sounding translations that preserve nuance better than Google Translate. Both are free for basic use, with paid tiers removing length limits and adding specialised features.
- DeepL generally produces more natural academic prose than Google Translate
- Invaluable for reading sources in other languages and for multilingual researchers drafting in English
- Free tiers have character limits; paid tiers are relatively affordable
- Neither is perfect — domain-specific terminology still needs expert review
- Translation quality varies by language pair (European languages tend to be strongest)
📌 A Note on Overlap
These categories are not watertight. General-purpose LLMs can do grammar checking. Specialist tools often include translation features. Grammarly is adding AI writing assistance. The trend is convergence — every tool wants to be your everything. But understanding the core category helps you evaluate whether a tool's peripheral features are genuinely good or just checkbox items on a marketing page.
🌍 The Multilingual Equity Angle
One of the most important — and genuinely hopeful — aspects of AI writing tools is their potential to reduce the disadvantage faced by researchers who write in English as a second, third, or fourth language. But this potential comes with a significant catch.
The Scale of the Problem
Research consistently shows that non-native English speakers spend significantly more time on writing tasks than their native-speaking peers: Amano et al. (2023), writing in PLOS Biology, put the gap at approximately 50%. This is not a matter of intelligence or research quality; it is a structural disadvantage baked into a publishing system that operates overwhelmingly in English.
For researchers at UCT and across Africa, where English is often a second, third, or fourth language, this time penalty is not trivial. It means fewer publications, slower career progression, and less time for the research itself. Any tool that genuinely reduces this gap is worth taking seriously.
Two Possible Futures for Multilingual Academic Publishing
A 2025 paper in PLOS Biology by Amano, Bowker & Burton-Jones presents a thought-provoking framework for how AI translation could reshape academic publishing. The paper identifies two divergent futures:
Future 1: English Reinforced
Everyone uses AI to write in English. The tools get good enough that non-native speakers can produce polished English text with minimal effort. This is the path of least resistance, and it is what most current tools optimise for.
The problem: This reinforces English hegemony. Knowledge continues to flow in one direction. Researchers must still operate in English to be heard, and their own languages remain marginal in academic discourse.
Future 2: Multilingual Access
Researchers write in their own language, and readers use AI to read in theirs. Publishing becomes genuinely multilingual because the translation barrier drops for both writers and readers.
The promise: This promotes linguistic diversity and allows researchers to think and write in the language most natural to them. Ideas flow in multiple directions. Academic knowledge becomes accessible in many languages, not just English.
Source: Amano, Bowker & Burton-Jones (2025). AI-mediated translation presents two possible futures for academic publishing in a multilingual world. PLOS Biology.
What Does This Mean for Africa?
The African context makes this question especially urgent. Across the continent, researchers navigate dozens of languages in their daily lives. When an isiZulu-speaking researcher at UCT must write in English, the time penalty is real and the cognitive cost is significant.
AI writing tools could be genuinely democratising here. A tool that helps a researcher express complex ideas in polished academic English — ideas they can think through perfectly well in isiXhosa or Afrikaans — removes a barrier that has nothing to do with research quality.
But we must ask: are we helping researchers communicate, or are we erasing the linguistic diversity that enriches academic thought? The answer depends on which future we build toward.
📌 Further Reading on Language Equity
A study available on ResearchGate, "Beyond English Hegemony: AI Academic Writing Tool Usage Among Non-Native English Speakers," examines how non-native English speakers are actually using these tools in practice and what patterns emerge in adoption across different linguistic communities. See the Readings section below for the full reference.
⚠️ The Homogenisation Problem
If the multilingual equity angle is the hopeful side of AI writing tools, the homogenisation problem is the cautionary one. There is growing evidence that AI writing assistance does not just help people write — it shapes how they write, and that shaping tends to push everyone toward the same style.
When AI Suggestions Flatten Cultural Differences
A 2025 study presented at CHI (the premier conference on human-computer interaction) directly tested what happens when people from different cultural backgrounds use an AI writing assistant. The findings are striking.
Researchers at Cornell had both Indian and American participants complete writing tasks with and without AI assistance. Two key findings emerged:
- Indian writing became more American-sounding when participants used AI suggestions. The AI's training data skewed toward Western English styles, and its suggestions pushed Indian writers away from their natural expression patterns.
- Indian participants got a smaller productivity boost compared to Americans. Why? Because they spent more time evaluating and correcting AI suggestions that did not match their intended meaning or style. The AI was optimising for a style that was not theirs.
Source: Agarwal, Naaman & Vashistha (2025). AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances. CHI 2025. Also available on arXiv.
⚠️ The Core Tension
This is the fundamental tension in AI writing assistance: the same tools that help non-native English speakers access academic publishing may simultaneously erode the diversity of expression that makes academic discourse rich. A world where every paper reads the same, whether the author is in Cape Town, Mumbai, or Boston, is not a richer world. It is a flatter one.
The Broader Homogenisation Concern
The writing homogenisation finding is part of a larger pattern. Daryani et al. (2026) argue in their paper "The Homogenizing Engine" that AI's standardising effects extend well beyond writing. When AI tools are trained on dominant cultural datasets and deployed globally, they tend to compress the variation that makes human expression diverse. In writing, this means less stylistic variety. In research, it could mean less diversity of thought.
This is not an argument against using AI writing tools. It is an argument for using them consciously — knowing that the suggestions they make carry implicit assumptions about what "good writing" looks like, and that those assumptions are not culturally neutral.
Source: Daryani et al. (2026). The Homogenizing Engine: AI's Role in Standardizing Culture. SAGE Journals.
📊 What the Evidence Says About Effectiveness
Do AI writing tools actually make academic writing better and faster? The evidence is surprisingly thin, but what exists is encouraging — with an important caveat.
The Carnegie Mellon Study
A study by Usdan, Connell Pensky & Chang (2024) at Carnegie Mellon University examined what happened when graduate students were given structured instruction on using AI writing tools for their academic work. The results were notable:
- Writing time reduced by approximately 65% — tasks that previously took hours were completed in roughly a third of the time with AI assistance.
- Average grades improved from B+ to A — the quality of the writing produced with AI assistance was rated higher by evaluators.
- Benefits were consistent for both native and ESL writers — non-native English speakers saw similar improvements to their native-speaking peers.
⚠️ "Proper Instruction" Is Doing a Lot of Work
The Carnegie Mellon results come with a crucial qualifier: participants received structured training on how to use AI writing tools effectively. They were taught specific prompting strategies, shown how to iterate on AI output, and given frameworks for maintaining their own voice while leveraging AI assistance.
This matters because untrained use of AI writing tools may not produce the same results. The difference between a researcher who pastes their draft into ChatGPT and says "make this better" and one who uses structured prompting strategies to get targeted feedback on specific aspects of their argument is enormous. The tool is the same; the outcome is not.
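To make that contrast concrete, here is a hypothetical pair of prompts for the same task. The wording is purely illustrative, not a prescribed template; the point is that the structured version names the material, constrains the AI's role, and asks for specific, checkable feedback:

```text
Vague:      "Make this better: [entire draft pasted in]"

Structured: "Here is the discussion section of my paper, drafted in English
             as my second language. Do not rewrite it. Instead:
             1. Flag any claims that need stronger hedging.
             2. List places where the argument skips a reasoning step.
             3. Suggest a clearer topic sentence for each paragraph,
                keeping my phrasing where possible so the voice stays mine."
```

The first prompt hands the AI full control and accepts whatever style it defaults to; the second keeps the writer in charge of both the argument and the voice.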
This is, incidentally, exactly why this course exists. The tool matters less than knowing how to use it.
The Practical Takeaway
AI writing tools can genuinely improve both the speed and quality of academic writing — but only when used with intention and skill. Dropping your draft into an AI and accepting whatever comes back is unlikely to produce good results. Using AI as a structured collaborator — for brainstorming, feedback, restructuring, and polishing — can produce excellent results.
The skills you are developing in this course — effective prompting, critical evaluation of AI output, iterative refinement — are what separate productive AI-assisted writing from lazy AI-dependent writing. The tool amplifies whatever approach you bring to it.
📋 Tool Comparison
Here is a practical comparison of the most commonly used AI writing tools. Note that pricing and features change frequently — check the tool's website for current details.
| Tool | Category | Best For | Free Tier? | Pricing | Key Limitation |
|---|---|---|---|---|---|
| Claude | General-purpose LLM | Detailed feedback, restructuring arguments, nuanced writing assistance | Yes (usage limits) | Free / $20/mo Pro | No citation management or academic style enforcement |
| ChatGPT | General-purpose LLM | Brainstorming, drafting, general writing feedback | Yes (GPT-4o limited) | Free / $20/mo Plus | Can be overly agreeable; no built-in academic features |
| Gemini | General-purpose LLM | Research-connected writing with Google integration | Yes | Free / $20/mo Advanced | Newer to the space; academic writing quality still catching up |
| Paperpal | Specialist academic | Language polishing for non-native English speakers, journal submission prep | Limited | ~$12/mo | Narrow focus on language; less useful for content-level feedback |
| Jenni AI | Specialist academic | Long-form academic writing with inline citation support | Limited (200 words/day) | ~$20/mo | Citation suggestions need verification; can encourage over-reliance |
| Grammarly | Grammar and style | Catching mechanical errors, improving clarity and readability | Yes (basic grammar) | ~$12/mo Premium | Surface-level only; does not evaluate argument quality or content |
| DeepL | Translation | Translating academic text between languages; reading foreign-language sources | Yes (character limits) | ~$9/mo Pro | Strongest for European languages; less reliable for African languages |
| LanguageTool | Grammar and style | Grammar checking with strong multilingual support; open-source option | Yes (basic) | ~$5/mo Premium | Less sophisticated AI suggestions than Grammarly; best for grammar only |
📌 On Pricing
All prices shown are approximate as of early 2026 and are given in US dollars. Prices change frequently, and many tools offer student discounts or institutional pricing. Before paying individually, check whether UCT has institutional access — your faculty librarian is the best first point of contact.
📚 Readings for This Session
Core Readings
- Amano, Bowker & Burton-Jones (2025). "AI-mediated translation presents two possible futures for academic publishing in a multilingual world." PLOS Biology. A thoughtful analysis of how AI translation could either reinforce or dismantle English hegemony in academic publishing.
- Agarwal, Naaman & Vashistha (2025). "AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances." CHI 2025; also available on arXiv. Empirical evidence that AI writing assistance pushes non-Western writers toward Western expression styles, with implications for cultural diversity in academic writing.
- "Beyond English Hegemony: AI Academic Writing Tool Usage Among Non-Native English Speakers." Available on ResearchGate. Examines patterns in how non-native English speakers adopt and use AI writing tools, and what this reveals about language equity in academia.
Supplementary Readings
- Daryani et al. (2026). "The Homogenizing Engine: AI's Role in Standardizing Culture." SAGE Journals. Broader analysis of how AI standardises cultural expression, including but not limited to writing.
- Explore the free tiers of tools discussed in this session. Try giving the same paragraph to Claude, Grammarly, and Paperpal and compare what each focuses on. The differences are instructive.
Key Takeaways
The landscape is wide but the categories are clear. General-purpose LLMs offer flexibility, specialist tools offer academic awareness, grammar tools fix mechanics, and translation tools bridge languages. Knowing which category you need prevents you from paying for features you will not use.
AI writing tools raise genuine equity questions. For non-native English speakers — including many researchers at UCT and across Africa — these tools can meaningfully reduce a structural disadvantage. This is one of the strongest arguments in their favour.
But homogenisation is real. The same tools that help diverse writers access academic publishing can also flatten the diversity of expression that enriches scholarly discourse. Using AI writing tools consciously means being aware of whose "good writing" the AI was trained to produce.
Evidence supports effectiveness — with training. Structured use of AI writing tools can dramatically improve both speed and quality. Unstructured use may not. The prompting and evaluation skills you are building in this course are what make the difference.
Next session: We turn to the most consequential question in AI-assisted academic writing — scientific integrity. Sub-Lesson 4 examines how AI tools interact with plagiarism, fabrication, and the norms of honest scholarly communication. The stakes are higher than most researchers realise.